A Comparative Evaluation of Combiner and Stacked Generalization

Authors

  • David W. Fan
  • Philip K. Chan
  • Salvatore J. Stolfo
Abstract

Combiner and Stacked Generalization are two very similar meta-learning methods that combine the predictions of multiple classifiers to improve on the accuracy of any single classifier. In this paper, we compare stacked generalization and combiner from the perspective of training efficiency versus accuracy. We show that both methods improve the accuracy of any single classifier to roughly the same degree. Moreover, we observe that the training cost of stacked generalization is very large and may prevent it from being used on very large data sets.
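
A minimal sketch of the setup the abstract describes, for readers unfamiliar with it: base classifiers are trained, their predictions on held-out items form meta-level training data, and a combiner is trained on those predictions. The library (scikit-learn), the particular classifiers, the dataset, and the single held-out split are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of the combiner / stacked-generalization idea: a meta-level
# classifier is trained on base classifiers' predictions.
# All concrete choices below are illustrative, not the paper's setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hold out part of the training data to generate meta-level examples.
X_base, X_meta, y_base, y_meta = train_test_split(
    X_train, y_train, test_size=0.5, random_state=0)

base_learners = [DecisionTreeClassifier(random_state=0), GaussianNB()]
for clf in base_learners:
    clf.fit(X_base, y_base)

def meta_features(X):
    # Meta-data: each base classifier's prediction for the same data item.
    return np.column_stack([clf.predict(X) for clf in base_learners])

# The combiner (meta-level classifier) maps base predictions to the label.
combiner = LogisticRegression()
combiner.fit(meta_features(X_meta), y_meta)

print("ensemble accuracy:", combiner.score(meta_features(X_test), y_test))
```

In Wolpert-style stacked generalization, the meta-level examples are usually generated by cross-validation rather than a single held-out split, so every base learner is retrained once per fold; this repeated retraining is a plausible reading of the higher training cost the abstract reports for large data sets.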

Similar articles

A Comparison between Combiner and Stacked Generalization

Combiner and Stacked Generalization are two very similar meta-learning methods that combine the predictions of multiple classifiers to improve on the accuracy of any single classifier. Both methods form a meta-level classifier from meta-data that are the predictions of multiple classifiers on the same data items. The difference between these two approaches lies in the way the meta-data are formed. In this paper, we com...

On Deriving the Second-Stage Training Set for Trainable Combiners

Unlike fixed combining rules, the trainable combiner is applicable to ensembles of diverse base classifier architectures with incomparable outputs. The trainable combiner, however, requires the additional step of deriving a second-stage training dataset from the base classifier outputs. Although several strategies have been devised, it is thus far unclear which is superior for a given situation...

Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the...

Using Negative Correlation Learning to Improve the Performance of Neural Network Ensembles

This paper investigates the effect of diversity caused by Negative Correlation Learning (NCL) in the combination of neural classifiers and presents an efficient way to improve combining performance. Decision Templates and Averaging, as two non-trainable combining methods, and Stacked Generalization, as a trainable combiner, are investigated in our experiments. Utilizing NCL for diversifying the ba...

An Ensemble Classification Model for the Diagnosis of Breast Cancer Using Stacked Generalization

Introduction: Breast cancer is one of the most common types of cancer, and its incidence has increased dramatically in recent years. In order to diagnose this disease, many parameters must be taken into consideration, and mistakes are possible due to human error or environmental factors. For this reason, in recent decades, Artificial Intelligence has been used by medical practitioners to diagnose ...

Journal:

Volume   Issue

Pages  -

Publication date: 1996